4 research outputs found

    Falsification-Based Robust Adversarial Reinforcement Learning

    Full text link
    Reinforcement learning (RL) has achieved tremendous progress in solving various sequential decision-making problems, e.g., control tasks in robotics. However, RL methods often fail to generalize to safety-critical scenarios since policies are overfitted to training environments. Previously, robust adversarial reinforcement learning (RARL) was proposed to train an adversarial network that applies disturbances to a system, which improves robustness in test scenarios. A drawback of neural-network-based adversaries is that integrating system requirements without handcrafting sophisticated reward signals is difficult. Safety falsification methods allow one to find a set of initial conditions as well as an input sequence such that the system violates a given property formulated in temporal logic. In this paper, we propose falsification-based RARL (FRARL), the first generic framework for integrating temporal-logic falsification into adversarial learning to improve policy robustness. With the falsification method, we do not need to construct an extra reward function for the adversary. We evaluate our approach on a braking assistance system and an adaptive cruise control system of autonomous vehicles. Experiments show that policies trained with a falsification-based adversary generalize better and violate the safety specification less often in test scenarios than those trained without an adversary or with an adversarial network.
    Comment: 11 pages, 3 figures
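
    As a rough illustration of the training scheme this abstract describes, the sketch below alternates policy training with a falsification step that searches for property-violating scenarios. It is a minimal sketch, assuming hypothetical helpers train_policy and falsify and a temporal-logic specification object spec; none of these names come from the paper itself.

    def frarl(env, policy, spec, iterations=10):
        """Alternate policy training with falsification-based adversarial episodes."""
        counterexamples = []  # falsified initial conditions / disturbance inputs
        for _ in range(iterations):
            # 1) Train the protagonist policy, replaying previously falsified
            #    scenarios alongside ordinary training episodes.
            policy = train_policy(env, policy, adversarial_cases=counterexamples)
            # 2) Falsification step: search for an initial condition and an input
            #    (disturbance) sequence under which the closed-loop system violates
            #    the temporal-logic property `spec`. No handcrafted adversary
            #    reward is needed; the specification itself drives the search.
            cx = falsify(env, policy, spec)
            if cx is None:
                break  # no violation found within the search budget
            counterexamples.append(cx)
        return policy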

    On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

    No full text
    The existence of real-world adversarial examples (RWAEs) (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks such as visual perception in autonomous driving. This article presents an extensive evaluation of the robustness of semantic segmentation (SS) models when attacked with different types of adversarial patches, including digital, simulated, and physical ones. A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels. Also, a novel attack strategy is presented to improve the expectation over transformation (EOT) method for placing a patch in the scene. Finally, a state-of-the-art method for detecting adversarial patches is first extended to cope with SS models, then improved to obtain real-time performance, and eventually evaluated in real-world scenarios. Experimental results reveal that even though the adversarial effect is visible with both digital and real-world attacks, its impact is often spatially confined to areas of the image around the patch. This raises further questions about the spatial robustness of real-time SS models.
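
    To make the EOT idea mentioned in the abstract concrete, the Python sketch below optimizes a patch over randomly sampled placements so that a segmentation model misclassifies pixels. It is a minimal sketch, assuming a small stand-in network, a plain cross-entropy loss toward a wrong class, and translation-only transformations; the paper's actual model, loss function, and transformation set differ.

    import torch
    import torch.nn.functional as F

    # Stand-in segmentation network (any model mapping BxCxHxW -> BxKxHxW works).
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 16, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.Conv2d(16, 21, 1),  # 21 output classes, as in Pascal VOC
    )
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the patch is optimized

    H, W, P = 128, 128, 32              # image and patch sizes (hypothetical)
    patch = torch.rand(3, P, P, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.01)

    def apply_patch(img, patch):
        """Paste the patch at a random location (a simple EOT transformation)."""
        y = torch.randint(0, H - P + 1, (1,)).item()
        x = torch.randint(0, W - P + 1, (1,)).item()
        out = img.clone()
        out[:, :, y:y + P, x:x + P] = patch.clamp(0, 1)
        return out

    target_class = 0  # drive every pixel toward one (wrong) class

    for step in range(100):
        img = torch.rand(1, 3, H, W)     # stand-in for real training images
        adv = apply_patch(img, patch)
        logits = model(adv)              # B x K x H x W
        target = torch.full(logits.shape[-2:], target_class,
                            dtype=torch.long).unsqueeze(0)
        # Maximize misclassification: minimize CE toward the adversarial target,
        # in expectation over the random placements sampled across steps.
        loss = F.cross_entropy(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()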

    A Machine Learning Based Approach to Application Landscape Documentation

    No full text
    Part 2: Model Derivation. In the era of digitalization, IT landscapes keep growing in complexity and dependencies. This amplifies the need to determine the current elements of an IT landscape, both for the management and planning of IT landscapes and for failure analysis. The field of enterprise architecture documentation has sought for more than a decade solutions that minimize or automate the manual effort of building enterprise architecture models. We summarize the approaches presented in the last decade in a literature survey. Moreover, we present a novel, machine-learning-based approach to detect and identify applications in an IT landscape.